perm filename NOTES.PAA[LSP,JRA]9 blob sn#126180 filedate 1974-10-22 generic text, type C, neo UTF8
.SEC(Applications and Examples of LISP definitions,Applications and Examples)
.BEGIN TURN ON "←";
.SS(Examples of Definitions)
We have already seen a few examples of definition and evaluation of
non-primitive LISP functions.  This section will spend a good deal of
time showing different styles of definition, giving hints about how to
write LISP functions, and increasing your familiarity with LISP. Finally,
 several complete and non-trivial examples will be  described as LISP
functions.

The style of definition available in our current subset of LISP is
%2⊗→definition by recursion↔←%*. That is, a typical LISP definition
will contain applications of the function name being defined.
Consider the factorial function, n!. We usually characterize this function
by saying:

%21.%*  The function is defined for non-negative integers.

%22.%*  The value of the function for 0 is 1.

%23%*.  Otherwise the value of n! is n times the value of (n-1)!.

The implication here is that it is somehow easier to compute (n-1)! than
to compute n!. 

Recall also our example of %3equal%* on {yon(P1)}. The question
of equality for two non-atomic sexprs was thrown back to the
question of equality for their %3car%*s and %3cdr%*s. 

These examples are typical of LISP definitions.
The body of the definition is a conditional expression
first involving a few special cases, called %2⊗→termination conditions↔←%*.
Then, the remainder of the conditional covers the general case-- what to
do if the argument to the function is not one of the special cases.
The termination case for the factorial should be: "is the argument 0";
termination for %3equal%* involves: "are either of the arguments atomic?".

It should be clear how to write a definition of the factorial function:

.BEGIN CENTERIT;SELECT 3;
.P20:
←fact[n] <= [eq[n;0] → 1; T → times[n;fact[n-1]]]

.END
Notice that %3fact%* is a partial function, defined only for non-negative
integers. %3equal%* is a total predicate; it is defined for all arguments.

When writing or reading LISP definitions pay particular attention to the
domain of definition. The following general hints should be useful:

%21%*.  Is it a function or predicate?

%22%*.  Are there any restrictions on the argument positions?

%23%*.  Are the termination conditions compatible with the restrictions
on the arguments?  If it is a recursion on lists, check for the empty
list; if it is a recursion of arbitrary sexprs, then check for the
appearance of an atom.

%24%*.  Whenever a function call is made within the definition are all
the restrictions on that function satisfied?

%25%*.  Don't try to do too much. Be lazy. There is usually a very simple
termination case, and if the general case looks messy, then write some 
subfunctions.  If the termination case looks messy there is probably
something wrong.

Here are some more examples of LISP definitions.


First let's write a predicate, %3member%* of two arguments, %3x%* and %3y%*.
%3x%* is to be atomic; %3y%* is to be a list; %3member%* is to return
%3T%* just in the case that %3x%* is  an element of the list %3y%*.

So the predicate is partial, the recursion should be on the structure of %3y%*,
and termination (with value %3NIL%*) should occur if %3y%* is the empty list. If %3y%* is not
empty then it has a first element, call it %3z%*; compare %3z%* with %3x%*.
If these elements are identical then  %3member%* should return %3T%*; otherwise
see if %3x%* occurs in the remainder of the list %3y%*.

Notes:

%21.%*  We cannot use %3eq%* to check equality since, though %3x%* is atomic, there is no
reason to believe that the elements of %3y%* need be atomic.

%22.%*  We can get the first element of a list with %3car%*, and the rest of a list
with %3cdr%*.

So here's %3member%*:
.BEGIN TABIT2(16,32);SELECT 3;
\member[x;y] <=\[null[y] → NIL;
\\ equal[car[y];x] → T;
\\ T → member[x;cdr[y]]]

.END
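As a cross-check (ours, not part of the original text), the definition can be transliterated into runnable Python, with Python lists standing in for LISP lists and %3False%* for %3NIL%*:

```python
def member(x, y):
    """member[x;y]: true just when x is an element of the list y."""
    if not y:                  # null[y] -> NIL
        return False
    if y[0] == x:              # equal[car[y];x] -> T  (equal, not eq)
        return True
    return member(x, y[1:])    # T -> member[x;cdr[y]]
```

Note that the comparison uses general equality, mirroring the use of %3equal%* rather than %3eq%*.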

Let us now consider a slightly more complicated numerical example, the 
⊗→fibonacci sequence↔←: 0, 1, 1, 2, 3, 5, 8, ... . This sequence is frequently
characterized by the following recurrence relation:

.BEGIN CENTERIT;
.GROUP
←f(0) = 0
←f(1) = 1
←f(n) = f(n-1)+f(n-2);
.APART
.END

.GROUP
The translation to a LISP function is easy:

.BEGIN TABIT2(16,26);SELECT 3;
\fib[n] <=\[eq[n;0] → 0;
\\ eq[n;1] → 1;
\\ T → plus[fib[n-1];fib[n-2]]]
.END

where %3plus%* is a representation of the mathematical function, %3+%*.
.APART

A few points can be made here. First, notice that the intuitive evaluation
scheme requires many duplications of computation.  For example,
computation of %3fib[5]%* requires the computation of %3fib[4]%* and %3fib[3]%*.
But within the calculation of %3fib[4]%* we must again calculate %3fib[3]%*,
etc.  It would be nice if we could restructure the definition of the function,
%3fib%* to stop this duplication of calculation. Since we %2do%* wish to run
programs on a machine we should give some attention to efficiency.
To those with programming experience, the solution is easy: assign the
partial computations to temporary variables.  The problem here is that
our current subset of LISP doesn't contain assignment. There is however
a very useful trick which we can use.

We will define another function, called %3fib%8/%1, on three variables %3x%*,
%3y%*, and %3n%*. The variables, %3x%* and %3y%*, will be used to carry
the partial computations. Consider:

.BEGIN TABIT2(17,34);SELECT 3;CENTERIT;
.GROUP
←fib%41%*[n] <= fib%8/%*[n;0;1]

\fib%8/%*[n;x;y] <=\[eq[n;0] → x;
\\ T → fib%8/%*[n-1;plus[x;y];x]]
.APART
.END

This example is complicated enough to warrant examination. The initial call,
%3fib%41%*[n]%1, has the effect of calling %3fib%8/%1 with %3x%* initialized to
%30%* and with %3y%* initialized to %31%*. The i%8th%* recursive call on %3fib%8/%1 within the body
of the definition has the effect
of saving the i%8th%* fibonacci number in %3x%* and the i-1%8st%* in %3y%*.
For example:

.BEGIN TABIT2(25,37);SELECT 3;
\fib%41%*[4] \= fib%8/%*[4;0;1]
\\= fib%8/%*[3;1;0]
\\= fib%8/%*[2;1;1]
\\= fib%8/%*[1;2;1]
\\= fib%8/%*[0;3;2]
\\= 3
.END
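A Python sketch (ours) makes the comparison concrete; %3fib%8/%1 is spelled %3fibp%* below since the prime is not a legal Python name:

```python
def fib(n):
    # direct transcription of the recurrence; duplicates subcomputations
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

def fib1(n):
    # fib1[n] <= fibp[n;0;1]: x and y carry the partial computations
    def fibp(n, x, y):
        return x if n == 0 else fibp(n - 1, x + y, x)
    return fibp(n, 0, 1)
```

Both compute the same sequence, but %3fib1%* performs only n recursive calls where %3fib%* performs an exponential number.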

This same trick of using ⊗→auxiliary function↔←s can be applied to the
factorial example. The resulting definition will be more efficient, though
the gain in efficiency is not as transparent  as that in the fibonacci
example. Thus:

.BEGIN TABIT1(10);SELECT 3;GROUP;CENTERIT;
.P21:
←fact%41%*[n] <= fac%8/%*[n;1];

←fac%8/%*[n;x] <= [eq[n;0] → x; T → fac%8/%*[n-1;times[n;x]]]
.APART
.END
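The same pair can be sketched in Python (our transliteration; %3fac%8/%1 appears as %3facp%*):

```python
def fact(n):
    # fact[n] <= [eq[n;0] -> 1; T -> times[n;fact[n-1]]]
    return 1 if n == 0 else n * fact(n - 1)

def fact1(n):
    # fact1[n] <= facp[n;1]: the extra argument accumulates the product
    def facp(n, x):
        return x if n == 0 else facp(n - 1, n * x)
    return facp(n, 1)
```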

It is clear in these examples that the functions %3fact, fact%41%1 and
%3fib, fib%41%1 are equivalent. Perhaps we should prove that this is so.
We shall examine the question of proofs of equivalence in {yonss(P15)}.

The trick of auxiliary functions is clearly applicable to LISP functions
defined over sexprs:

.BEGIN CENTERIT;SELECT 3;group;
.P19:
←length[n] <= [null[n] → 0; T → plus[1;length[cdr[n]]]]

←length%41%*[n] <= length%8/%*[n;0]

←length%8/%*[n;x] <= [null[n] → x; T → length%8/%*[cdr[n];plus[x;1]]]
%1and it seems apparent that:
%3←length[n] ≡ length%41%*[n]
.APART
.END

So far our examples have either been numerical or have been predicates.
Predicates only require traversing existing sexprs; certainly  we will
want to write algorithms to build new sexprs.  Consider:

.BEGIN CENTERIT;SELECT 3;GROUP;
.P97:

←reverse[x] <= rev%8/%*[x;NIL]

←rev%8/%*[x;y] <= [null[x] → y; T → rev%8/%*[cdr[x];cons[car[x];y]]]
.APART
.END

The function, ⊗→%3reverse%*↔←, builds a list which is the reverse of its
argument.  Notice that this definition uses an auxiliary function.  
.P16:
Sometimes it is more natural to express algorithms this way. We will
see a "direct" definition of the reversing function in a moment.

This %3reverse%* function builds up the new list in a very straightforward manner, %3cons%*ing
the elements onto the second argument of %3rev%8/%*%1. Since %3y%* was initialized
to %3NIL%* we are assured that the resulting construct will be a list.
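In a Python rendering (ours, with %3rev%8/%1 as %3revp%*) the accumulation is easy to watch:

```python
def reverse(x):
    # reverse[x] <= revp[x;NIL]: each element is consed onto y,
    # so the front of x ends up at the back of the result
    def revp(x, y):
        return y if not x else revp(x[1:], [x[0]] + y)
    return revp(x, [])
```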

Construction is usually not quite so straightforward. Suppose we wish to
define a LISP function named ⊗→%3append%*↔← of two list arguments, %3x%* and %3y%*,
which is to return a new list which has %3x%* appended onto the front of %3y%*.
For example:

.BEGIN CENTERIT;SELECT 3; GROUP;

←append[(A B D);(C E)] = (A B D C E)
←append[A;(B C)] %1 is undefined%*. %3A%1 is not a list.
←%3append[(A B C);NIL] = append[NIL;(A B C)] = (A B C)
.APART
.END

So %3append%* is a partial function. It should be defined by recursion,
but recursion on which argument?  Well, if either argument is %3NIL%*
then the value given by %3append%* is the other argument. 
The next simplest case is a one-element
list; if exactly one of %3x%* or %3y%* is a singleton how does that help us
discover the recurrence relation for appending? It doesn't help much if %3y%*
is a singleton; but if %3x%* is, then %3append%* could give:

←%3cons[car[x];y]%* as result.

So recursion on %3x%* is likely. The definition follows easily now.

.P22:
←%3append[x;y] <= [null[x] → y; T → cons[car[x];append[cdr[x];y]]].%*

Notice that the construction of the result is a bit more obscure than
that involved in %3reverse%*. The construction has to "wait" until
we have seen the end of the list %3x%*. For example:

.BEGIN TABIT2(10,40);SELECT 3;GROUP;
\append[(A B C);(D E F)] \= cons[A;append[(B C);(D E F)]]
\\= cons[A;cons[B;append[(C);(D E F)]]]
\\= cons[A;cons[B;cons[C;append[NIL;(D E F)]]]]
\\= cons[A;cons[B;cons[C;(D E F)]]]
\\= cons[A;cons[B;(C D E F)]]
\\= cons[A;(B C D E F)]
\\= (A B C D E F)
.APART
.END
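The definition and its "waiting" construction can be checked with a Python sketch (ours):

```python
def append(x, y):
    # append[x;y] <= [null[x] -> y; T -> cons[car[x];append[cdr[x];y]]]
    # construction "waits" until the end of x has been seen
    return y if not x else [x[0]] + append(x[1:], y)
```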

We are assured  of constructing a list here because %3y%* is a list
and we are %3cons%*ing onto the front of it. LISP functions
which are to construct list-output by %3cons%*ing %2must%*  %3cons%* onto  
the front of an existing %2list%*. That list may be either
non-empty or the empty list, %3NIL%*. 
This is why the termination conditions on such list-constructing functions
return %3NIL%*.
Examine %3reverse%* on {yon(P16)} or the following binary function, %3dotem%*.
The arguments to %3dotem%* are both lists assumed to contain the same
number of elements. The value returned is to be a list of dotted pairs; the 
elements of the pairs are the corresponding elements of the input lists. Thus:

.BEGIN SELECT 3;TABIT1(16);

dotem[x;y] <= [\null[x] → NIL;
\T → cons[cons[car[x];car[y]];dotem[cdr[x];cdr[y]]]]

.END
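A Python sketch of %3dotem%* (ours; a Python tuple stands in for a dotted pair):

```python
def dotem(x, y):
    # x and y are assumed to be lists of equal length; the result pairs
    # corresponding elements, consing onto the NIL returned at termination
    if not x:                   # null[x] -> NIL
        return []
    return [(x[0], y[0])] + dotem(x[1:], y[1:])
```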

Now, as promised on {yon(P16)}, here is a "direct" definition of %3reverse%*.

.BEGIN SELECT 3;TABIT1(13);GROUP;

.P23:
reverse[x] <=\[null[x] → NIL;
\ T → append[reverse[cdr[x]];cons[car[x];NIL]]] 
.END

This reversing function is not as efficient as the previous one.

.end

It %2is%* possible to write a directly recursive reversing function
with no auxiliary functions, no functions other than the primitives, and
no efficiency. We shall do so simply because it is a good example of
the process of discovering the general case of the recursion by careful
consideration of examples. Let us call the function %3rev%*.

Let's worry about the termination conditions later. Consider, for example,
%3rev[(A#B#C#D)]%*. This should evaluate to %3(D#C#B#A)%*. How can we
construct this list by recursive calls on %3rev%*? 
In the following, assume %3x%* is bound to %3(A#B#C#D)%*.
Now note that %3(D#C#B#A)%* is the
value of %3cons[D;(C#B#A)]%*. Then %3D%* is %3car[rev[cdr[x]]]%* (it is also
%3car[rev[x]]%* but that would not help us).
How can we get %3(C#B#A)%*?## Well:
.BEGIN TABIT2(21,30);GROUP;SELECT 3;

\(C B A)\= rev[(A B C)]
\\= rev[cons[A;(B C)]]   %1(we are going after %3cdr[x]%* again)%3
\\                        %1but first  we can get %3A%* from %3x.
\\= rev[cons[car[x];(B C)]]
\\= rev[cons[car[x];rev[(C B)]]]
\\= rev[cons[car[x];rev[cdr[(D C B)]]]]
\\= rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]
.END

.BEGIN SELECT 3;CENTERIT;

%1Finally%*
←rev[x] = cons[car[rev[cdr[x]]];rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]]
.END

%1
The termination conditions are simple. First %3rev[NIL]%* gives %3NIL%*.
Then notice that the general case which we just constructed has %2two%*
%3cons%*es.  That means the shortest list which it can make is of length two.
So lists of length one are handled separately: the reverse of such a list
is itself.
Thus the complete definition should be:

.BEGIN SELECT 3;GROUP;TABIT1(13);

rev[x] <= [\null[x] → NIL;
\null[cdr[x]] → x;
\T → cons[car[rev[cdr[x]]];rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]]]
.END
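Since the derivation above is intricate, it is worth checking mechanically. A Python transliteration (ours):

```python
def rev(x):
    # rev[x]: reversal with no auxiliary functions and, as promised,
    # no efficiency -- three recursive reversals in the general case
    if not x:          # null[x] -> NIL
        return []
    if not x[1:]:      # null[cdr[x]] -> x: a singleton is its own reverse
        return x
    # cons[car[rev[cdr[x]]]; rev[cons[car[x]; rev[cdr[rev[cdr[x]]]]]]]
    return [rev(x[1:])[0]] + rev([x[0]] + rev(rev(x[1:])[1:]))
```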

.REQUIRE "PROB5.PUB" SOURCE_FILE;

.SS(Examples of LISP applications)
.BEGIN TURN ON "←";

The crucial problems in data structures are the interrelationships 
between algorithms and the representation which is
chosen.  Once these choices have been made then the rest is easy.
Knuth once remarked that simulation languages should not have
any I/O statements, because if you got to the point of running
on a machine then you have done something wrong.  All your ideas
should have been sorted out as you were constructing the model
and describing the algorithms.  He wasn't quite serious but the
point should be clear.

The next few sections will examine some non-trivial problems
in non-numerical computation (data structures).  We will describe
the problem intuitively, pick an initial representation for the
problem, write the LISP algorithm, and in some cases "tune" the
algorithm by picking "more efficient" data representations.
The examples share other important characteristics: 


%21.%*  We take our intuitive algorithm and try to express it as a LISP function.

%22.%*  We take the intuitive domain and encode its elements as sexprs.

%23.%*  %21%* and %22%* usually happen in parallel and are perhaps iterated.

%24.%*  We evaluate the LISP function on a representation of a problem.

%25.%*  We interpret the sexpression output as an answer to our problem.

Pictorially:

.BEGIN GROUP;TABIT1(35);
.P46:

intuitive algorithm => LISP function\|
\|  evaluation
\|==============> interpret sexpr output as answer
\|
domain of algorithm => sexpressions\|

.END

This process is certainly not unique to LISP. Every language does it.
.SS(Differentiation,differentiation,P41:)

This example will describe a rudimentary differentiation
routine for polynomials in several variables.  First you should
realize that the normal definition of differentiation is
recursive!  The question of differentiation of a sum is thrown
back on the differentiation of each summand.  Similar relationships 
hold for products, differences, and powers.  As with all
good recursive definitions, there must be some termination
conditions.  What are the termination conditions here?  
Differentiation of a variable, say %3x%*, with respect to %3x%* is defined to be 1;
differentiating a constant or a variable not equal to %3x%* with
respect to %3x%* gives a result of zero.  It is easy to write recursive
algorithms in LISP; the only problem is that the domain (and
range) of LISP functions is Sexprs, not the polynomials which
we need.  So we need a way to represent arbitrary polynomials as Sexprs.
Though polynomials can be arbitrarily complex, involving plus,
times, minus, powers, etc., their general format is very simple
if they are described in prefix notation (standard 
 functional notation) and we assume that +, *, - and ** are
binary operations.
For example:
.GROUP SKIP 1;
.BEGIN TABIT2(30,45);
.GROUP
\%2infix\prefix
%3

\x*z+2y\+[*[x,z], *[2,y]]
\x*y*z\*[x,*[y,z]]
%1
.APART
.END

(optional problem:  write BNF equations expressing such a
representation of polynomials)

How does this help us?  We still don't have Sexprs.  First,
the operations need to be represented as atoms:

.BEGIN TABIT2(30,45);group;
\%2operation\representation
%3

\+\PLUS
\-\MINUS
\*\TIMES
\**\EXPT
%1
.END
How about a representation of variables and constants?  Let
the variables be mapped to their uppercase counterparts, which are LISP atoms.
Let constants (numerals) be just the numerals; these are
also respectable LISP atoms.  Looking ahead to the algorithm, the termination 
condition on variables and constants can then just be
given in terms of the predicate, %3atom%*.

.GROUP
That is, if %3u%* is a constant or a variable then:

.BEGIN TABIT2(10,20);
\D%3u%*/D%3x%* = 1\if %3x = u
\\%10 otherwise
.APART

.GROUP
will translate as :

\%3[atom[u] → [eq[u;x] → 1; T → 0] ....%*
.END
.APART

Now finally, how can we represent an arbitrary polynomial as an sexpr?

Write:
.BEGIN TABIT2(30,45);
%1
\+[x,y]\as %3(PLUS X Y)%*
\*[x,y]\as %3(TIMES X Y)%*
\**[x,y]\as %3(EXPT X Y)%*
%1
or in general:
%3
←f[t%41%*,t%42%*]    %1as %3(F%*, translate of %3t%41%1, translate of %3t%42%3)
%1

.GROUP
For example:
%3

←x%82%* + 2yz + u
%1
.APART
will be translated to the following prefix notation:

%3
←+[**[x,2], +[*[2,*[y,z]], u]]
.FILL
%1

(This is messier than it really needs to be because we required
+ and * to be binary).  From this it's easy to get the sexpr
form:
.NOFILL
%3
←(PLUS (EXPT X 2)  (PLUS (TIMES 2 (TIMES Y Z)) U))
%1
.FILL
    



Now we can complete the differentiation algorithm.  We know:

%1
.BEGIN TABIT3(10,28,39);
\D[%3f + g%*]/D%3x%* = D%3f/%*D%3x + %*D%3g%*/D%3x.
%1

We would see %3u = f + g%* as %3u = (PLUS, %*rep of %3f%*, rep of %3g)%*
Where:\%3cadr[u] = %*rep of %3f
\caddr[u] = %*rep of %3g%*.        ⊗↓We have done a reasonably evil thing here.
We have tied the algorithm for symbolic differentiation to a specific 
representation for polynomials. It is useful to use a specific representation
for the moment and repent later. In particular, see {yon(P74)}, {yon(P60)}
and {yon(P73)}.←


.FILL
The result of differentiating %3u%* is to be a new list of three
elements:
.NOFILL

\1.  the symbol %3PLUS%*.
\2.  the effect of %3diff%* operating on the rep. of %3f%*.
\3.  the effect of %3diff%* operating on the rep. of %3g%*.

Thus another part of the algorithm:

%3
\eq [car[u];PLUS] →\list [PLUS; diff[cadr[u];x];diff[caddr[u];x]]
%1.

D[%3f - g]/%*D%3x%* is very similar.

D[%3f*g]%*/D%3x%* is defined to be %3f* %1D%3g%*/D%3x + g *%1D%*f/%1D%*x%1.


So here's another part of %3diff%*:
.GROUP
%3
\eq[car[u];TIMES] →\list[PLUS; 
\\     list[TIMES; cadr[u];diff [caddr[u];x]];
\\     list[TIMES;caddr[u];diff [cadr[u];x]]]
%1
.APART

Here's an example. We know:

←D[%3x*y + x]%*/D%3x = y + 1%*

.END
.GROUP
Try:
%3
.BEGIN TABIT3(10,17,23);
diff [(PLUS (TIMES X Y) X); X]
\=list [PLUS; diff[(TIMES X Y); X]; diff [X;X]]
\=list [PLUS; 
\\list [PLUS; 
\\\list [TIMES; X; diff [Y;X]];
\\\list [TIMES; Y; diff [X;X]]];
\\diff [X;X]]

\=list [PLUS;
\\list [PLUS;
\\\list [TIMES; X ;0];
\\\list [TIMES; Y;1]];
\\1 ]

.APART
\=(PLUS  (PLUS (TIMES X 0) (TIMES Y 1)) 1)

.END
%1
which can be interpreted as:

%3
←x*0 + y*1 + 1 .
%1

Which introduces another set of non-numerical algorithms:
algebraic simplification!


.END

.<<abstracting from rep>>
.BEGIN CENTERIT;SELECT 2;
←Points to note.
.END

This problem of representation is typical of data structure
algorithms (regardless of what language you use).  That is,
once you have decided what the intuitive problem is, pick a
representation which makes your algorithms clean (then worry
about efficiency).  In the next set of examples we will see a
 series of representations, each becoming more and more "efficient"
and each requiring more "knowledge" built into the algorithm.

Here's the whole algorithm for differentiation using + and *.
%3
.BEGIN TABIT3(10,35,41);
.GROUP

diff[u;x] <=
\[atom [u] → [eq [x;u] → 1; T → 0];
\ eq [car [u]; PLUS] → list\[PLUS; 
\\ diff [cadr [u]; x];
\\ diff [caddr [u]; x]]
\ eq [car [u]; TIMES] → list\[PLUS; 
\\ list\[TIMES; 
\\\ cadr [u];
\\\ diff [caddr [u]; x]];
\\ list\[TIMES; 
\\\ caddr [u];
\\\ diff [cadr [u]; x]];
\ T → LOSER]
.APART
.END
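The whole algorithm runs as written; a Python transliteration (ours, with nested Python lists for the sexpr representation) reproduces the worked example above:

```python
def diff(u, x):
    # atoms represent variables and constants; sums and products are
    # the lists (PLUS a b) and (TIMES a b)
    if not isinstance(u, list):        # atom[u]
        return 1 if u == x else 0
    if u[0] == 'PLUS':
        return ['PLUS', diff(u[1], x), diff(u[2], x)]
    if u[0] == 'TIMES':
        return ['PLUS',
                ['TIMES', u[1], diff(u[2], x)],
                ['TIMES', u[2], diff(u[1], x)]]
    return 'LOSER'
```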
.select 1;
.P74:

As we mentioned earlier, the current manifestation of %3diff%* encodes too
much of our particular representation for polynomials. The separation
of algorithm from representation is beneficial from at least two standpoints.
First, changing representation should have a minimal effect on the structure
of the algorithm. %3diff%* %2knows%* that variables are represented as atoms
and a sum is represented as a list whose %3car%*-part is %3PLUS%*.
Second, without such separation the readability of the algorithm suffers greatly.

How much of %3diff%* really needs to know about the representation and
how can we improve the readability of %3diff%*?

First the %3car-cdr%* chains are not particularly mnemonic. We used
%3cadr%* to get the first argument to a sum or product and used %3caddr%*
to get the second.
We used %3car%* to extract the operator.

Let's define:

.BEGIN CENTERIT;SELECT 3;group;
←op[x] <= car[x]
←arg%41%*[x] <= cadr[x]
←arg%42%*[x] <= caddr[x]
.end

Then %3diff%* becomes:

.BEGIN TABIT3(10,35,41);select 3;
.GROUP

diff[u;x] <=
\[atom [u] → [eq [x;u] → 1; T → 0];
\ eq [op[u]; PLUS] → list\[PLUS; 
\\ diff [arg%41%* [u]; x];
\\ diff [arg%42%* [u]; x]]
\ eq [op[u]; TIMES] → list\[PLUS; 
\\ list\[TIMES; 
\\\ arg%41%* [u];
\\\ diff [arg%42%* [u]; x]];
\\ list\[TIMES; 
\\\ arg%42%* [u];
\\\ diff [arg%41%* [u]; x]];
\ T → LOSER]
.APART
.END

Still, there is much of the representation present. Recognition of variables
and other terms can be abstracted from the representation. We need only
recognize when a term is a sum, a product, a variable or a constant.
In terms of the current  representation we could define such recognizers or
predicates as:

.BEGIN CENTERIT; SELECT 3;group;

←issum[x] <= eq[op[x];PLUS]
←isprod[x] <= eq[op[x];TIMES]
←isconst[x] <= numberp[x]
←isvar[x] <= [atom[x] → not[isconst[x]]; T → NIL]
←samevar[x;y] <= eq[x;y]

.END
Using these predicates we can rewrite %3diff%* as:

.BEGIN TABIT3(10,27,33);select 3;
.GROUP

diff[u;x] <=
\[isvar[u] → [samevar[x;u] → 1; T → 0];
\ isconst[u] → 0;
\ issum[u] → list\[PLUS; 
\\ diff [arg%41%* [u]; x];
\\ diff [arg%42%* [u]; x]]
\ isprod[u] → list\[PLUS; 
\\ list\[TIMES; 
\\\ arg%41%* [u];
\\\ diff [arg%42%* [u]; x]];
\\ list\[TIMES; 
\\\ arg%42%* [u];
\\\ diff [arg%41%* [u]; x]];
\ T → LOSER]
.APART
.END

Readability is certainly improving, but the representation is still
known to %3diff%*.
When we build the result of the sum or product of derivatives we use
knowledge of the representation.
Rather it would be better to define:
.BEGIN CENTERIT;SELECT 3;group;

←makesum[x;y] <= list[PLUS;x;y]
←makeprod[x;y] <= list[TIMES;x;y]
.END
Then the new %3diff%* is:

.BEGIN TABIT2(10,31);select 3;
.GROUP

diff[u;x] <=
\[isvar[u] → [samevar[x;u] → 1; T → 0];
\ isconst[u] → 0;
\ issum[u] → makesum[diff [arg%41%* [u]; x];diff[arg%42%* [u]; x]]
\ isprod[u] → makesum[\makeprod[arg%41%* [u];diff[arg%42%* [u]; x]];
\\ makeprod[arg%42%* [u];diff [arg%41%* [u]; x]];
\ T → LOSER]
.APART
.END

Notice that %3diff%* is much more readable now and more importantly, the details
of the representation have been relegated to subfunctions. Changing
representation simply requires supplying different subfunctions. No changes
need be made to %3diff%*. There has only been a slight decrease in efficiency.
The termination condition in the original %3diff%* is a bit more succinct.
Looking back, we first abstracted the selector 
functions, those which selected components; then we abstracted the recognizers,
the predicates telling which term was present; then we modified
the constructors, the functions which make new terms.
These three components of programming: selectors, recognizers, and constructors
will appear again on {yon(P75)} in discussing McCarthy's abstract syntax.
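The final, representation-independent %3diff%* can be sketched in Python (ours); only the subfunctions below know the representation, so a change of representation touches them alone:

```python
# selectors, recognizers, and constructors for the list representation
def op(x):        return x[0]
def arg1(x):      return x[1]
def arg2(x):      return x[2]
def isconst(x):   return isinstance(x, int)            # numberp
def isvar(x):     return isinstance(x, str)            # atom, not a number
def issum(x):     return isinstance(x, list) and op(x) == 'PLUS'
def isprod(x):    return isinstance(x, list) and op(x) == 'TIMES'
def samevar(x, y): return x == y
def makesum(a, b):  return ['PLUS', a, b]
def makeprod(a, b): return ['TIMES', a, b]

def diff(u, x):
    if isvar(u):
        return 1 if samevar(x, u) else 0
    if isconst(u):
        return 0
    if issum(u):
        return makesum(diff(arg1(u), x), diff(arg2(u), x))
    if isprod(u):
        return makesum(makeprod(arg1(u), diff(arg2(u), x)),
                       makeprod(arg2(u), diff(arg1(u), x)))
    return 'LOSER'
```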
.SS(Algebra of polynomials)
.BEGIN TURN ON "#";

%1
Assume we want to perform addition and multiplication
of polynomials and assume that each polynomial is of the
form %3p%41%* + p%42%* + ... + p%4n%1 where each %3p%4i%1 is a product of variables
and constants.  The standard algorithm for addition of
%9S%8n%4i=1%3p%4i%1 with   %9S%8m%4j=1%3q%4j%1 says you can combine a
%3q%4j%1 with a %3p%4i%1 if the variable parts of these terms 
are identical.  In this
case the resulting term has the same variable part but has a
constant part equal to the sum of the constant parts of %3p%4i%1
and %3q%4j%1.
For example if %3p%4i%1 is %32x%* and %3q%4j%1 is %33x%* the sum term is %35x%*.

.BEGIN GROUP;
←First representation:

We could use the representation of the differentiation
example.  But since the form of polynomials we are using is more restricted than
the previous form, that
representation is more complex than necessary.
.END

.BEGIN GROUP;
←Second representation:

Using our knowledge of the forms of polynomials, write
%9S %3p%4i%1 as:

←%3( (%1rep of %3p%41%*), (%1rep of %3p%42%*) ...)%1

and each %3p%4i%1 is represented as some list of its components
(constants and variables).
.END
Is this representation sufficient?  Does it have the 
flexibility we need?  How do we represent the test for  equal variable parts 
of %3p%4i%1 and %3q%4j%1?  %3equal%* does it.  If they are equal then we build a
new list representing the same variable part but with a constant
part equal to the sum.

.BEGIN GROUP
←Third representation:

%3equal%* is expensive.  Can we do better?  That is can we
represent some term like %32*x%82%3 *y%83%** z%1 better than say

←%3(2 (EXPT X 2) (EXPT Y 3) (EXPT Z 1))%*?
.END

First, %3EXPT%* is really unnecessary; the list:
←%3(2 (X 2) (Y 3) (Z 1))%*, carries the same information.

Now  if we assume that we know how many variables can ever
appear and then assume that those variables always appear
in the same order in a term then we could write the above as:

←%3(2 2 3 1)%*   assuming %3x, y, z%*
are the only variables.

Let's stop for some examples.
.BEGIN NOFILL;TURN ON "\";TABS 30,45;

\%2term\representation
\%32xyz\(2 1 1 1)
\2x%82%*z\(2 2 0 1)
\4z%83%*\(4 0 0 3)
.END

Now let's think about the structure of the main algorithm.
The algorithm for the sum must compare terms: on finding like terms
it generates a new term; otherwise it simply copies the terms.

.BEGIN GROUP;
←Fourth representation:

When we pick a %3p%4i%1 from the first polynomial we would
like to find the corresponding %3q%4j%1 with the minimum amount of
searching.  
.END
This can be accomplished if we can order the terms
in the polynomials. So we will assume that there is a maximum
number of digits in the exponents of any term.  For sake of
argument, assume 2 digits; then we will represent terms as
follows:
.GROUP
.BEGIN NOFILL;TABS 30,45;TURN ON "\";

\%2term\representation

\%32x*y*z\(2, 010101)
\2x%82%*z\(2, 020001)
\4z%83%*\(4, 000003)
.END
.APART
%1

Now we can order on the numeric representation of the variable
part of the term.  One more simplification:

←represent %3 ax%8A%**y%8B%**z%8C%1 as %3(a . ABC).

%1
Finally we will write the algorithm.  We will assume that
the polynomials are initially ordered and will write the 
algorithm so as to maintain that ordering.
Each term is a dotted pair of elements: the coefficient
and a representation of the exponent.

.P60:
As in the previous differentiation example, we should extract the algorithm
from the representation.
We shall define:

←%3coef[x] <= car[x]%1 and %3 expo[x] <= cdr[x].
%1

To test the ordering we will use the LISP predicate:

←%3greaterp[x;y] = x>y.
%1

In the construction of the `sum'
polynomial we will generate new terms by combining coefficients.
So a function is needed to reflect this construction. 
%3node[x;y]%1  where %3x%* is a representation of the coefficient and
%3y%* is a representation of the exponent is thus defined as:

←%3node[x;y] <= cons[x;y].%1

.GROUP
So here's a representation of a polynomial:

←%3x%82%* - 2y - z %1(assuming max exponent of 9.)

%3
.BEGIN NOFILL;



					%1note:%* 200>010>001
  1   200 


         -2    010


                  -1    001       NIL

.APART

.GROUP
%1
Here's the algorithm:
%3
.BEGIN NOFILL;TABS 3,10,35;TURN ON "\";
polyadd[p;q] <=
\[null[p] →q; null[q] → p;
\ eq[expo[car[p]];expo[car[q]]] →\[zerop[plus[coef[car[p]];coef[car[q]]]]
\\\    → polyadd[cdr[p];cdr[q]];
\\\ T → cons[node[plus[coef[car[p]];coef[car[q]]];
\\\             expo[car[p]]];polyadd[cdr[p];cdr[q]]]];
\ greaterp[expo[car[p]];expo[car[q]]] → cons[car[p];polyadd[cdr[p];q]];
\ T → cons[car[q];polyadd[p;cdr[q]]]]
.END
%1
.APART

.GROUP
Now for an explanation and example:

First we used some new LISP functions:

←%3plus[x;y] = x + y
←zerop[x]  <= eq[x;0]
.APART
%1
.P72:

%3polyadd%* is of the form: %3[p%41%* → e%41%*; p%42%* → e%42%*; p%43%* → e%43%*; p%44%* → e%44%*; p%45%* → e%45%*]

.BEGIN SELECT 1;INDENT 0,10;FILL;
Case i: %3p%41%3 → e%41%1 and %3p%42%* → e%42%1. If either polynomial is empty return the other.

Case ii: %3p%43%* → e%43%1.  If the exponents are equal then we can think about combining
terms.  However, we must check for cancellations and not include any 
such terms in our resultant polynomial.

Case iii: %3p%44%* → e%44%1 and %3p%45%* → e%45%1. These sections worry about the ordering of
terms so that the resultant polynomial retains the ordering.
terms so that the resultant polynomial retains the ordering.
.END
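The five cases can be exercised with a Python sketch (ours): a term is a pair (coefficient, exponent code), and a polynomial a list of such pairs in decreasing exponent order, so %3x%82%*#-#2y#-#z%1 with codes 200, 010, 001 becomes [(1,200), (-2,10), (-1,1)]:

```python
def coef(t):    return t[0]
def expo(t):    return t[1]
def node(c, e): return (c, e)

def polyadd(p, q):
    # p and q ordered by decreasing exponent code; ordering is maintained
    if not p: return q                       # cases i
    if not q: return p
    if expo(p[0]) == expo(q[0]):             # case ii: like terms
        s = coef(p[0]) + coef(q[0])
        if s == 0:                           # cancellation: drop the term
            return polyadd(p[1:], q[1:])
        return [node(s, expo(p[0]))] + polyadd(p[1:], q[1:])
    if expo(p[0]) > expo(q[0]):              # cases iii: keep the ordering
        return [p[0]] + polyadd(p[1:], q)
    return [q[0]] + polyadd(p, q[1:])
```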

%1
Here's an informal execution of %3polyadd:
.GROUP

.BEGIN NOFILL;TABS 10;TURN ON "\";
polyadd[x+y+z;x%82%*-2y-z]
\= cons[x%82%*;polyadd[x+y+z;-2y-z]]
\= cons[x%82%*;cons[x;polyadd[y+z;-2y-z]]]
\= cons[x%82%*;cons[x;cons[node[1+-2;y];polyadd[z;-z]]]]
\= cons[x%82%*;cons[x;cons[-y;polyadd[z;-z]]]]
\= cons[x%82%*;cons[x;cons[-y;polyadd[NIL;NIL]]]]
\= cons[x%82%*;cons[x;cons[-y;NIL]]]
←= x%82%*+x-y
.END
.APART
.END
.END
.END
.SS(Evaluation of polynomials,,P2:)
.BEGIN TURN ON "←";
Given an arbitrary polynomial, and values for any of the variables which it
contains, we would like to compute its value. For this section we will
assume that the substitution of values for variables has already been
carried out. Thus we are dealing with polynomials of the form: %9S%8n%4i=1%3p%4i%1
where %3p%4i%1 is a product of powers of constants. For example:

←%32%83%* + 3*4%82%* + 5.%1  This could be represented as:

←%3(PLUS (EXPT 2 3)(PLUS (TIMES 3(EXPT 4 2)) 5)).%*

We will now describe a LISP function, %3value%*, which will take such an
sexpr representation and compute its value. First, input to %3value%* will be
numbers or lists beginning with either %3PLUS, TIMES,%* or %3EXPT%*.
The value of a number is that number; to evaluate the other forms of input
we should perform the operation represented. We must therefore assume
that operations of plus, times and exponentiation exist.  Assume they are
named +, *, and ↑, respectively. What then should be the value of a list whose %3car%*-part
is %3PLUS%*? It should be the result of adding the value of the %3cadr%* of the
list to the value of the %3caddr%* of the list. That is %3value%* is  clearly
recursive.
  To test for the occurrence of a number we shall assume
a unary LISP predicate called %3numberp%* which returns %3T%* just in the case
that its argument is a number.

It should now be clear how to write %3value%*:

.BEGIN TABIT1(10);SELECT 3;
value[x] <=\[numberp[x] → x;
\ eq[car[x];PLUS] → +[value[cadr[x]];value[caddr[x]]];
\ eq[car[x];TIMES] → *[value[cadr[x]];value[caddr[x]]];
\ eq[car[x];EXPT] → ↑[value[cadr[x]];value[caddr[x]]]]

.END
Or more abstractly:

.BEGIN TABIT1(10);SELECT 3;
value[x] <=\[isnumber[x] → x;
\ issum[x] → +[value[arg%41%*[x]];value[arg%42%*[x]]];
\ isprod[x] → *[value[arg%41%*[x]];value[arg%42%*[x]]];
\ isexpt[x] → ↑[value[arg%41%*[x]];value[arg%42%*[x]]]]

.END
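A Python sketch of %3value%* (ours) confirms the example above evaluates to 61:

```python
def value(x):
    # value of the sexpr representation of a constant polynomial
    if isinstance(x, int):            # numberp[x] -> x
        return x
    a, b = value(x[1]), value(x[2])   # value is clearly recursive
    if x[0] == 'PLUS':  return a + b
    if x[0] == 'TIMES': return a * b
    if x[0] == 'EXPT':  return a ** b
```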
.END
.<<PROBLEMS ON POLY EVAL AND THE GREAT MOTHERS>>
.SELECT 1;
.REQUIRE "PROB6.PUB"SOURCE_FILE;
.<<PROVING PROPERTIES>>
.NEXT PAGE;
.SS(Proving properties of programs,,P15:)
.BEGIN TURN ON "←","#";
People are becoming increasingly aware of the importance of giving 
convincing
arguments for such things as the correctness or equivalence of programs. These are
both very difficult enterprises. We will only sketch a proof
of a simple property of two programs and leave others as problems for the 
interested reader.

Using the definition of %3append%* given on {yon(P22)} and the definition
of %3reverse%* given on {yon(P23)}, we wish to show that:

←%3append[reverse[y];reverse[x]] = reverse[append[x;y]]%*,

for any lists, %3x%*, and %3y%*. The induction will be on the structure of %3x%*.

.BEGIN TABIT3(5,10,15);
\|%2Basis%*: %3x%* is %3NIL%*.
\|We must thus show: %3append[reverse[y];NIL] = reverse[append[NIL;y]]%*
\|But: %3reverse[append[NIL;y]] = reverse[y]%*  by the def. of %3append%*.
\|We now establish the stronger result:  %3append[z;NIL] = z%*
\\|%2Basis:%* %3z%* is %3NIL%*.
\\|Show %3append[NIL;NIL] = NIL%*. Easy.
\\|%2Induction step:%* Assume the lemma for lists, %3z%*, of length n;
\\|Prove: %3append[cons[x;z];NIL] = cons[x;z]%*.
\\|Since %3cons[...]%* is not %3NIL%*, then applying the definition of %3append%*
\\|says we must prove: %3cons[x;append[z;NIL]] = cons[x;z]%*.
\\|But an application of the induction hypothesis gives our result.
\|So the Basis for our main result is established.
\|%2Induction step:%* Assume the result for lists, %3z%*, of length n;
\|Prove:
\|←   (1)   %3append[reverse[y];reverse[cons[x;z]]] = reverse[append[cons[x;z];y]]%*
\| Using the definition of %3reverse%* on the LHS of (1) gives:
\|←   (2)   %3append[reverse[y];append[reverse[z];list[x]]]%*.
\| Using the definition of %3append%* on the RHS of (1) gives:
\|←   (3)   %3reverse[cons[x;append[z;y]]].%*
\| Using the definition of %3reverse%* on (3) gives:
\|←   (4)   %3append[reverse[append[z;y]];list[x]].%*
\| Using our induction hypothesis on (4) gives:
\|←   (5)   %3append[append[reverse[y];reverse[z]];list[x]]%*
\| Thus we must establish that    (2) = (5).
\| But this is just an instance of the associativity of %3append%*:
\|←%3append[x;append[y;z]] = append[append[x;y];z].%*  (See problem I, below.)
.END

The structure of the proof is analogous to proofs by mathematical 
induction in elementary number theory. The ability to perform such proofs
is a direct consequence of our careful definition of data structures.
Examination of the proof will show that there is a close relationship
between what we are inducting on in the proof and what  we are recurring on
during the evaluation of the  expressions. A program
written by Boyer and Moore has been reasonably successful in generating
proofs like the above by exploiting this relationship.
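While no substitute for the inductive proof, the theorem can at least be spot-checked mechanically; here is a Python sketch (ours) using the definitions of %3append%* and the "direct" %3reverse%*:

```python
def append(x, y):
    return y if not x else [x[0]] + append(x[1:], y)

def reverse(x):
    # the "direct" reverse, defined via append
    return [] if not x else append(reverse(x[1:]), [x[0]])

# spot-check: append[reverse[y];reverse[x]] = reverse[append[x;y]]
cases = [([], []), (['A'], ['B']), (['A', 'B'], ['C', 'D', 'E'])]
for x, y in cases:
    assert append(reverse(y), reverse(x)) == reverse(append(x, y))
```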

←%2Problems%*

I. Prove the associativity of %3append%*.

II Analysis of the above proof shows frequent use of other results for LISP
functions. Fill in the details. Investigate the possibility of formalizing
this proof, showing what axioms are needed.

III Show the equivalence of %3fact%* ({yon(P20)}) and %3fact%41%1 ({yon(P21)}).

IV Show the equivalence of %3length%* and %3length%41%1 ({yon(P19)}).

V  Using the definition of %3reverse%*, given on {yon(P16)}, prove:

←%3reverse[reverse[x]] = x%*.

VI  Prove that the construction %3atom[x] → eq[x;y]%* is equivalent to
%3atom[x]#→#[eq[x;y]#→T;#T#→#NIL]%*. See {yon(P24)}.
.END